Project - Computer Vision - 1

by HARI SAMYNAATH S

DOMAIN: Botanical research
• CONTEXT: University X is currently conducting research to understand the characteristics of plants and plant seedlings at various stages of growth. They have already invested in curating sample images. They require an automated solution that can create a classifier capable of determining a plant's species from a photo.
• DATA DESCRIPTION: The dataset comprises images of 12 plant species. Source: https://www.kaggle.com/c/plant-seedlings-classification/data
• PROJECT OBJECTIVE: The university's management requires an automated solution that can create a classifier capable of determining a plant's species from a photo.

Steps and tasks:

  1. Import the data. Analyse the dimensions of the data. Visualise the data.

Let's review the image dimensions in the training data.
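Surveying the dimensions could be sketched as below; this is a minimal sketch assuming the Kaggle layout of one sub-folder per species under a `train` directory, with PNG images (the function name and path are illustrative, not from the project code).

```python
# Sketch: tally the distinct (width, height) pairs across all training images.
# Assumes "train/<species>/<image>.png" layout from the Kaggle dataset.
from pathlib import Path
from collections import Counter
from PIL import Image

def summarize_image_dims(train_dir):
    """Return a Counter of (width, height) pairs over all PNG images."""
    sizes = Counter()
    for img_path in Path(train_dir).rglob("*.png"):
        with Image.open(img_path) as img:
            sizes[img.size] += 1
    return sizes

# e.g. print(summarize_image_dims("train").most_common(5))
```

Seedling images in this dataset vary widely in size, so a tally like this motivates the resizing step before training.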

Let's visualise 5 randomly chosen images from each species.
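A sketch of that grid of samples, again assuming one sub-folder per species (the helper name and figure sizing are illustrative):

```python
# Sketch: plot a grid with one row per species and 5 random samples per row.
import random
from pathlib import Path
import matplotlib
matplotlib.use("Agg")  # drop this line to display interactively
import matplotlib.pyplot as plt
from PIL import Image

def plot_samples(train_dir, per_class=5):
    classes = sorted(p for p in Path(train_dir).iterdir() if p.is_dir())
    fig, axes = plt.subplots(len(classes), per_class,
                             figsize=(2 * per_class, 2 * len(classes)))
    axes = axes.reshape(len(classes), per_class)
    for row, cls in zip(axes, classes):
        picks = random.sample(list(cls.glob("*")), per_class)
        for ax, img_path in zip(row, picks):
            ax.imshow(Image.open(img_path))
            ax.set_title(cls.name, fontsize=8)
            ax.axis("off")
    return fig
```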

Steps and tasks:

  1. Design, train, tune, and test the best AIML image classifier model using:
    • Supervised learning algorithms
    • Neural networks
    • CNN
```python
# Clear any logs from previous runs
try:
    !rm -rf ./logs/
    os.mkdir('logs')
except:
    os.mkdir('logs')

fpath = "/media/harinaathan/Barracuda/HARI SAMYNAATH S/Anaconda_workspace/GLAIML_course/09 - ComputerVision/3 - week Project/logs/"
file = open(fpath + "CNN_tuning.har", 'wb')  # reset log file
file.close()
# the file creation & log reset above are to be commented out
# after setting up a tuning framework
```

Transfer Learning from ResNet50

Trusting the capabilities of ResNet50, let's perform transfer learning from the ResNet50 CNN.
Later, we will build customised neural networks and a classical machine learning model, in an attempt to match the accuracy of the CNN-based transfer learning technique.

Let's functionally connect the ResNet output to a series of Dense and BatchNormalization layers before connecting to a softmax layer via a Dropout.
Let's also take a parallel stream with a slightly different hierarchy and concatenate both outputs before the softmax.
To choose the best architecture, let's try out a few epochs for each combination.
We will also combine the hyperparameter search on the go.
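The two-stream head described above could be sketched with the Keras functional API as follows. The layer widths and dropout rate are illustrative assumptions, and `weights=None` is used here only so the sketch runs offline; the project's transfer learning loads `weights="imagenet"`.

```python
# Sketch of the dual-stream head on a frozen ResNet50 base.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import ResNet50

def build_transfer_model(n_classes=12, input_shape=(224, 224, 3)):
    # weights=None keeps this sketch offline; the project uses "imagenet"
    base = ResNet50(include_top=False, weights=None,
                    input_shape=input_shape, pooling="avg")
    base.trainable = False  # freeze the convolutional base

    # stream 1: Dense -> BatchNormalization stack
    x1 = layers.Dense(256, activation="relu")(base.output)
    x1 = layers.BatchNormalization()(x1)
    x1 = layers.Dense(128, activation="relu")(x1)
    x1 = layers.BatchNormalization()(x1)

    # stream 2: a slightly different hierarchy
    x2 = layers.Dense(128, activation="relu")(base.output)
    x2 = layers.BatchNormalization()(x2)

    # concatenate both streams, then Dropout -> softmax
    x = layers.concatenate([x1, x2])
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)

    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```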

Halt here on this error: do not re-run the cell below; the hyperparameter tuning is complete for 300 search instances.

Only refineCNN_002 with grid[9] sustained the iterations.
Let's try unfreezing the ResNet layers and retraining the model.
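Unfreezing for fine-tuning could be sketched as below; the lower learning rate is a common fine-tuning choice, not a value taken from the project.

```python
# Sketch: unfreeze every layer (the ResNet base included) and recompile
# with a reduced learning rate so fine-tuning does not wreck the weights.
import tensorflow as tf

def unfreeze_and_recompile(model, lr=1e-5):
    for layer in model.layers:
        layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

Recompiling after changing `trainable` is required, since the flag is only read when the model is compiled.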

The unfreezeCNN_001 has achieved a remarkable 0.92 accuracy.
Let's use it to predict the given test image.
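The single-image prediction step could be sketched as below; `predict_fn` stands in for `model.predict`, and the class-name list, target size, and 1/255 scaling are assumptions.

```python
# Sketch: load one image, preprocess, and return the predicted species.
import numpy as np
from PIL import Image

def predict_species(img_path, predict_fn, class_names, size=(224, 224)):
    img = Image.open(img_path).convert("RGB").resize(size)
    batch = np.asarray(img, dtype="float32")[None, ...] / 255.0
    probs = predict_fn(batch)[0]
    return class_names[int(np.argmax(probs))], float(np.max(probs))
```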

The ResNet50-based CNN has predicted the test image correctly.

Artificial Neural Networks model

Let's build a few neural network modelling functions that will build, train, and evaluate the models.
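Those helpers could be sketched as below for a plain dense network on flattened images; the hidden-layer widths, loss, and function names are illustrative assumptions.

```python
# Sketch of build / learn / evaluate helpers for a dense network.
from tensorflow import keras

def build_nn(input_dim, n_classes=12, hidden=(512, 256)):
    model = keras.Sequential([keras.Input(shape=(input_dim,))])
    for units in hidden:
        model.add(keras.layers.Dense(units, activation="relu"))
        model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def learn(model, X, y, epochs=10, val_split=0.2):
    return model.fit(X, y, epochs=epochs, validation_split=val_split, verbose=0)

def evaluate(model, X, y):
    loss, acc = model.evaluate(X, y, verbose=0)
    return acc
```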

```python
# file = open(fpath + "NN_tuning.har", 'wb')  # reset log file
# file.close()
# the file above is not re-created after setting up a good hypertuning
# framework, hence the lines above are commented out
```

Though the predicted class is correct, the softmax output is not sharply peaked at the desired class.
This is a direct effect of the trained model's poor accuracy.

ML model

Let's attempt a basic ML model.
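A classical baseline on flattened pixels could be sketched as below; the choice of RandomForest and its settings are assumptions (any scikit-learn classifier fits the same way). Comparing train and test accuracy here is what exposes the overfitting noted next.

```python
# Sketch: classical ML baseline on flattened pixel rows.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def fit_ml_baseline(X_train, y_train, X_test, y_test, n_estimators=200):
    clf = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    clf.fit(X_train, y_train)  # each row of X is one flattened image
    train_acc = accuracy_score(y_train, clf.predict(X_train))
    test_acc = accuracy_score(y_test, clf.predict(X_test))
    return clf, train_acc, test_acc
```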

That is an extremely overfit model.

As expected from the accuracy scores, it has predicted incorrectly.

That's pretty close to the desired result,
but it is a mere fluke, as the accuracy scores suggest.

Not a very good result.

Very similar results.

Steps and tasks:

  1. Compare the results from the above step along with your detailed observations on the best performing algorithm and why/how it outperforms other algorithms in this case.
  2. Pickle the best performing model.
  3. Import the test image [from the “Prediction” folder] to predict the class. Display the image. Use the best trained image classifier model to predict the class.
  1. The comparison and discussion can be found below.
  2. The best models were pickled in their respective sections.
  3. The predictions are referred to in their respective sections, and the best prediction is repeated below.
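The pickling in step 2 could be sketched as below; for the Keras CNN the project would more likely use `model.save(...)`, so plain `pickle` is shown here for the scikit-learn-style models (the helper names are illustrative).

```python
# Sketch: persist and reload a trained model with pickle.
import pickle

def save_model(model, path):
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```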

It is evident that the CNN model outperforms all the other models.
This advantage comes from the following key points:
1) ImageNet-learned weights via transfer learning
2) convolutions capture the spatial correlation in images
3) ResNet50 is itself 175 layers deep